Nowadays, with the rapid development of the Internet, the era of big data has arrived. The Internet generates huge amounts of data every day, yet extracting meaningful information from this data is like looking for a needle in a haystack. Data mining techniques provide feasible methods to address this problem. To date, many sequential rule mining (SRM) algorithms have been proposed to find sequential rules in databases with sequential characteristics, and these rules help people extract meaningful information from massive amounts of data. How can we compress the mined results and reduce their size to save storage space and transmission time? Until now, there has been little research on the compression of sequential rules. In this paper, combining the Minimum Description Length (MDL) principle with the two metrics of support and confidence, we introduce the problem of compression in SRM and propose a solution named ComSR for MDL-based compression of sequential rules, built on a purpose-designed sequential rule coding scheme. To the best of our knowledge, we are the first to use sequential rules to encode an entire database. A heuristic method is proposed to find a set of sequential rules that is as compact and meaningful as possible. ComSR comes in two trade-off variants, ComSR_non and ComSR_ful, depending on whether the database must be completely compressed. Experiments on a real dataset under different thresholds show that a compact and meaningful set of sequential rules can be found, demonstrating the effectiveness of the proposed method.
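To make the MDL trade-off concrete, the following minimal Python sketch compares the cost of encoding a small sequence database raw against encoding it with one sequential rule. The uniform log2 code, the contiguous cover test, and the one-reference-per-sequence accounting are our own simplifying assumptions for illustration; they are not ComSR's actual coding scheme.

```python
# A minimal sketch of the MDL trade-off behind rule-based database compression:
# total cost = L(model) + L(data | model). Assumed cost model, not ComSR's.
import math

def code_len(n_symbols, alphabet_size):
    """Bits to encode n_symbols under a uniform code over the alphabet."""
    return n_symbols * math.log2(alphabet_size)

def mdl_cost(database, rules, alphabet_size):
    """Rules cost bits to describe, but items they cover are replaced by a
    cheap rule reference instead of being spelled out."""
    model_bits = sum(code_len(len(ante) + len(cons), alphabet_size)
                     for ante, cons in rules)
    data_bits = 0.0
    for seq in database:
        covered = set()
        for ante, cons in rules:
            pattern = ante + cons
            # naive cover test: the rule's items appear as a contiguous run
            for i in range(len(seq) - len(pattern) + 1):
                if seq[i:i + len(pattern)] == pattern:
                    covered.update(range(i, i + len(pattern)))
        n_uncovered = len(seq) - len(covered)
        n_refs = 1 if covered else 0  # crude: one rule reference per sequence
        data_bits += code_len(n_uncovered, alphabet_size)
        data_bits += n_refs * math.log2(max(len(rules), 2))
    return model_bits + data_bits

db = [list("abcab"), list("abcd"), list("abc")]
baseline = sum(code_len(len(s), 4) for s in db)         # 24 bits, no rules
with_rule = mdl_cost(db, [(list("ab"), list("c"))], 4)  # 15 bits with a,b -> c
print(baseline, with_rule)  # a rule helps only if it lowers the total bit count
```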
As an important data mining technique, high-utility itemset mining (HUIM) is used to find interesting but hidden information (e.g., profit and risk). HUIM has been widely applied in many application scenarios, such as market analysis, medical detection, and web click-stream analysis. However, most previous HUIM approaches often ignore the relationships between items within an itemset. Consequently, many irrelevant combinations (e.g., {gold, apple} and {notebook, book}) are discovered by HUIM. To address this limitation, many algorithms have been proposed to mine correlated high-utility itemsets (CoHUIs). In this paper, we propose a novel algorithm called Itemset Utility Maximization with Correlation Measure (CoIUM), which considers both strong correlation and the profitable values of items. The new algorithm adopts a database projection mechanism to reduce the cost of database scanning. Moreover, two upper bounds and four pruning strategies are utilized to effectively prune the search space, and a concise array-based structure named utility-bin is used to calculate and store the adopted upper bounds in linear time and space. Finally, extensive experimental results on dense and sparse datasets demonstrate that CoIUM significantly outperforms state-of-the-art algorithms in terms of runtime and memory consumption.
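As a rough illustration of the two quantities such CoHUI miners combine, the toy Python sketch below computes the utility of an itemset over a small quantitative transaction database together with an all-confidence correlation score. All-confidence is assumed here purely for illustration; it is not necessarily the correlation measure CoIUM adopts, and the sketch ignores upper bounds and pruning entirely.

```python
# Toy quantitative transaction database: {item: (quantity, unit_profit)}.
DB = [
    {"a": (2, 5), "b": (1, 3), "c": (4, 1)},
    {"a": (1, 5), "b": (2, 3)},
    {"b": (3, 3), "c": (1, 1)},
]

def utility(itemset, db):
    """Sum of quantity * unit_profit over transactions containing the itemset."""
    return sum(sum(t[i][0] * t[i][1] for i in itemset)
               for t in db if all(i in t for i in itemset))

def support(itemset, db):
    return sum(all(i in t for i in itemset) for t in db)

def all_confidence(itemset, db):
    """sup(X) / max_{i in X} sup({i}); high value = strongly correlated items."""
    return support(itemset, db) / max(support((i,), db) for i in itemset)

X = ("a", "b")
print(utility(X, DB), all_confidence(X, DB))  # 24 and 0.666...
```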
High-utility itemset mining approaches discover hidden patterns from large amounts of temporal data. However, an unavoidable problem of high-utility itemset mining is that its results hide the quantitative information of patterns, which leads to poor interpretability. The results merely reflect customers' shopping trends and cannot help decision-makers quantify the collected information. In linguistic terms, computers use precisely formalized mathematical or programming languages, whereas the language humans use is inherently ambiguous. In this paper, we propose a novel one-phase temporal fuzzy utility itemset mining approach called TFUM. It revises the temporal fuzzy list to maintain less but essential information about potential high temporal fuzzy utility itemsets in memory, and then discovers a complete set of truly interesting patterns in a short time. In particular, the remaining measure is adopted for the first time in the temporal fuzzy utility itemset mining domain. The remaining maximal temporal fuzzy utility is a tighter and stronger upper bound than those adopted in previous studies, and thus plays an important role in pruning the search space of TFUM. Finally, we evaluate the efficiency and effectiveness of TFUM on various datasets. Extensive experimental results show that TFUM outperforms state-of-the-art algorithms in terms of runtime cost, memory usage, and scalability. Moreover, the experiments demonstrate that the remaining measure can significantly prune unnecessary candidates during mining.
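The fuzzification step that gives such mining its linguistic flavor can be sketched in a few lines of Python: crisp purchase quantities are mapped to graded membership in linguistic terms. The triangular membership functions and the Low/Middle/High terms below are illustrative assumptions, not TFUM's actual membership design.

```python
# Sketch of fuzzifying a crisp quantity into linguistic terms.
def triangular(x, a, b, c):
    """Classic triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

TERMS = {  # hypothetical linguistic terms over purchase quantity
    "Low":    lambda q: triangular(q, 0, 1, 6),
    "Middle": lambda q: triangular(q, 1, 6, 11),
    "High":   lambda q: triangular(q, 6, 11, 16),
}

def fuzzify(quantity):
    return {term: round(mu(quantity), 3)
            for term, mu in TERMS.items() if mu(quantity) > 0}

print(fuzzify(4))  # {'Low': 0.4, 'Middle': 0.6}
# A fuzzy utility then weights unit profit by these membership degrees, so a
# pattern like "(item, Middle)" stays interpretable even as raw quantities vary.
```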
For applied intelligence, utility-driven pattern discovery algorithms can identify insightful and useful patterns in databases. However, in these pattern discovery techniques, the number of patterns can be huge, and the user is often interested in only a few of them. Hence, targeted high-utility itemset mining has emerged as a key research topic, whose aim is to find a subset of patterns that match a target pattern constraint rather than all patterns. This is a challenging task because efficiently finding tailored patterns in a very large search space requires a targeted mining algorithm. A first algorithm called TargetUM has been proposed, which adopts an approach similar to post-processing using a tree structure, but its runtime and memory consumption are unsatisfactory in many situations. In this paper, we address this issue by proposing a novel list-based algorithm with a pattern matching mechanism, named THUIM (Targeted High-Utility Itemset Mining), which can quickly match high-utility itemsets during the mining process to select the target patterns. Extensive experiments were conducted on different datasets to compare the performance of the proposed algorithm with state-of-the-art algorithms. Results show that THUIM performs very well in terms of runtime and memory consumption, and has good scalability compared to TargetUM.
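The core of target-aware pruning can be illustrated with a small Python sketch: a search branch is kept only if the current candidate, extended with the items still available, can still cover the target pattern. This is a generic illustration of the idea, not THUIM's actual list-based matching mechanism.

```python
# Generic target-matching checks for targeted itemset mining.
def can_reach_target(candidate, extension_items, target):
    """Prune check: can candidate, plus some allowed extensions, cover the target?"""
    missing = set(target) - set(candidate)
    return missing <= set(extension_items)

def matches_target(itemset, target):
    """Final check: does a mined itemset contain the target pattern?"""
    return set(target) <= set(itemset)

# candidate {a}, items d and e still available, target {a, d}: branch survives
print(can_reach_target({"a"}, {"d", "e"}, {"a", "d"}))   # True
# candidate {b, c}, only e left, target {a, d}: the whole branch can be pruned
print(can_reach_target({"b", "c"}, {"e"}, {"a", "d"}))   # False
```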
Analyzing sequence data usually leads to the discovery of interesting patterns and then to anomaly detection. In recent years, numerous frameworks and methods have been proposed to discover interesting patterns in sequence data as well as to detect anomalous behavior. However, existing algorithms mainly focus on frequency-driven analysis, which makes them challenging to apply in real-world settings. In this work, we present a new anomaly detection framework called DUOS that can discover utility-aware outlier sequential rules from a set of sequences. In this pattern-based anomaly detection algorithm, we incorporate both the anomalousness and the utility of a group, and introduce the concept of utility-aware outlier sequential rules (UOSR). We show that this is a more meaningful way of detecting anomalies. Moreover, we propose some efficient pruning strategies w.r.t. upper bounds for mining UOSR, as well as for anomaly detection. An extensive experimental study conducted on several real-world datasets shows that the proposed DUOS algorithm has better effectiveness and efficiency. Finally, DUOS outperforms the baseline algorithm and has suitable scalability.
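As a rough Python sketch of rule-based anomaly scoring in sequences: a high-confidence sequential rule X -> Y (X occurs, then Y occurs later) is "expected", and a sequence that contains X without a later Y deviates from it. The confidence and violation scores below are illustrative only and do not reproduce the paper's utility-aware UOSR measures.

```python
# Toy sequential-rule confidence and rule-violation anomaly score.
def rule_holds(seq, x, y):
    """True if x occurs and y occurs at some later position; None if x absent."""
    if x in seq:
        return y in seq[seq.index(x) + 1:]
    return None

def confidence(db, x, y):
    applicable = [s for s in db if x in s]
    if not applicable:
        return 0.0
    return sum(rule_holds(s, x, y) for s in applicable) / len(applicable)

def anomaly_score(seq, rules):
    """Fraction of applicable rules that the sequence violates."""
    applicable = [(x, y) for x, y in rules if x in seq]
    if not applicable:
        return 0.0
    return sum(not rule_holds(seq, x, y) for x, y in applicable) / len(applicable)

db = [list("abcd"), list("abed"), list("acbd"), list("aXcY")]
rules = [("a", "b")]                                # conf = 3/4 over db
print(confidence(db, "a", "b"), anomaly_score(list("aXcY"), rules))  # 0.75 1.0
```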
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
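A minimal PyTorch sketch of the two-stage idea follows: encode one value sequence per key, then self-attend across the resulting per-key vectors. The GRU encoder, dimensions, and mean pooling are our own stand-ins for illustration, not the paper's architecture or its interleaved training schedule.

```python
# Sketch: per-key value-sequence encoding (TVM) + cross-key attention (KA).
import torch
import torch.nn as nn

class TVMKASketch(nn.Module):
    def __init__(self, vocab_size, n_keys, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.tvm = nn.GRU(d, d, batch_first=True)   # stand-in value-sequence encoder
        self.ka = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, value_seqs):  # (batch, n_keys, seq_len) of value ids
        b, k, t = value_seqs.shape
        x = self.embed(value_seqs.view(b * k, t))   # encode each key's value sequence
        _, h = self.tvm(x)                          # (1, b*k, d) final hidden states
        keys = h.squeeze(0).view(b, k, -1)          # one vector per key
        out, _ = self.ka(keys, keys, keys)          # self-attend across keys
        return out.mean(dim=1)                      # sequence-level representation

model = TVMKASketch(vocab_size=100, n_keys=5)
print(model(torch.randint(0, 100, (2, 5, 8))).shape)  # torch.Size([2, 64])
```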
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups, with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
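The contrastive ingredient can be sketched generically: embeddings of the same face under two simulated quality levels are treated as a positive pair under an NT-Xent-style loss. This is a standard contrastive-learning sketch for illustration, not CQIL itself or its IQV/SQN modules.

```python
# Quality-contrastive sketch: pull together two quality variants of each sample.
import torch
import torch.nn.functional as F

def ntxent(z1, z2, tau=0.1):
    """z1[i] and z2[i] are embeddings of two quality variants of sample i."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (n, n) cross-view similarity matrix
    labels = torch.arange(z1.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z_hi = torch.randn(8, 128)  # embeddings of high-quality face crops
z_lo = torch.randn(8, 128)  # embeddings of degraded (low-res, noisy) crops
print(ntxent(z_hi, z_lo))
```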
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as the new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
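The soft boundary-label idea can be illustrated with a short NumPy sketch: when an annotated timestamp falls between sparsely sampled frames, its label mass is spread over the neighboring frames instead of snapping to a single one. The Gaussian weighting below is an assumed illustrative choice, not SSRN's exact scheme.

```python
# Spread a boundary timestamp's label mass over nearby sampled frames.
import numpy as np

def soft_boundary_labels(sampled_times, boundary_t, sigma=0.5):
    w = np.exp(-((np.asarray(sampled_times) - boundary_t) ** 2) / (2 * sigma**2))
    return w / w.sum()

sampled = [0.0, 1.0, 2.0, 3.0, 4.0]  # timestamps of the sparsely sampled frames
print(soft_boundary_labels(sampled, boundary_t=2.3).round(3))
# mass concentrates on the frames at 2.0 and 3.0 instead of one hard boundary
```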
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which solves the data security problem that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth, and it has been pointed out that the vast majority of model parameters are redundant during transmission. Building on this observation, we explore the data distribution of selected partial model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model through hierarchical quantization of its parameters and thereby reduces the network load caused by data transmission. We also adopt a dynamic sampling strategy for client selection to accelerate model convergence. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
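A minimal NumPy sketch of per-layer uniform quantization of model updates follows: each layer is quantized at its own bit-width before upload, shrinking the payload at the cost of a bounded reconstruction error. The per-layer bit-width assignment here is a toy heuristic for illustration; the paper's hierarchical scheme and dynamic client sampling are not reproduced.

```python
# Per-layer uniform affine quantization of model weights before transmission.
import numpy as np

def quantize(w, bits):
    """Map weights to integer levels in [0, 2^bits - 1]."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2**bits - 1) or 1.0   # guard against constant layers
    q = np.round((w - lo) / scale).astype(np.uint16)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q * scale + lo

layers = {"conv1": np.random.randn(64, 9), "fc": np.random.randn(10, 64)}
bits = {"conv1": 8, "fc": 4}  # hypothetical per-layer bit-widths
for name, w in layers.items():
    q, lo, scale = quantize(w, bits[name])
    err = np.abs(dequantize(q, lo, scale) - w).max()
    print(name, f"{32 / bits[name]:.0f}x smaller vs float32, max err {err:.4f}")
```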
In the new era of personalization, learning the heterogeneous treatment effect (HTE) has become an inevitable trend with numerous applications. Yet, most existing HTE estimation methods focus on independently and identically distributed observations and cannot handle the non-stationarity and temporal dependency of the common panel data setting. The treatment evaluators developed for panel data, on the other hand, typically ignore individualized information. To fill the gap, in this paper we initiate the study of HTE estimation in panel data. Under different assumptions for HTE identifiability, we propose the corresponding heterogeneous one-sided and two-sided synthetic learners, namely H1SL and H2SL, by leveraging state-of-the-art HTE estimators for non-panel data and generalizing the synthetic control method to allow a flexible data-generating process. We establish the convergence rates of the proposed estimators. The superior performance of the proposed methods over existing ones is demonstrated by extensive numerical studies.
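For readers new to HTE estimation, the baseline idea in the i.i.d. setting can be sketched with a T-learner: fit separate outcome models on treated and control units and take their difference as the effect estimate. This sketch, on simulated data, illustrates only the concept the paper generalizes; the panel-data estimators H1SL and H2SL are substantially more involved.

```python
# T-learner sketch for HTE on simulated i.i.d. data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)
tau = 2.0 * X[:, 0]                          # true heterogeneous effect
Y = X.sum(axis=1) + T * tau + rng.normal(size=n)

m1 = RandomForestRegressor().fit(X[T == 1], Y[T == 1])  # treated outcome model
m0 = RandomForestRegressor().fit(X[T == 0], Y[T == 0])  # control outcome model
tau_hat = m1.predict(X) - m0.predict(X)
print(np.corrcoef(tau_hat, tau)[0, 1])       # should be clearly positive
```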